Search results: Creators/Authors contains "Russell, Alexander"

  1. Nakamoto’s longest-chain consensus paradigm now powers the bulk of the world’s cryptocurrencies and distributed finance infrastructure. An emblematic property of longest-chain consensus is that it provides probabilistic settlement guarantees that strengthen over time. This makes the exact relationship between settlement error and settlement latency a critical aspect of the protocol that both users and system designers must understand to make informed decisions. A recent line of work has finally provided a satisfactory rigorous accounting of this relationship for proof-of-work longest-chain protocols, but those techniques do not appear to carry over to the proof-of-stake setting. This article develops a new analytic approach for establishing such settlement guarantees that yields explicit, rigorous settlement bounds for proof-of-stake longest-chain protocols, placing them on equal footing with their proof-of-work counterparts. With some adaptations, the techniques also apply to the proof-of-work setting, where they improve on the state-of-the-art settlement bounds. (A toy illustration of the error-versus-latency trade-off appears after this entry.)
    Free, publicly accessible full text available August 24, 2024
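    For intuition only, the sketch below computes the classic proof-of-work settlement estimate from the Bitcoin whitepaper; it is not the article's proof-of-stake analysis, and it ignores network delay, which the rigorous bounds account for. All parameters are illustrative.

    ```python
    from math import exp, factorial

    def settlement_failure_prob(q: float, z: int) -> float:
        """Probability that an attacker controlling a fraction q of the
        hash power eventually overtakes a chain that is z blocks ahead
        (Nakamoto's estimate; ignores network delay)."""
        p = 1.0 - q
        lam = z * q / p  # expected attacker progress while honest miners find z blocks
        prob = 1.0
        for k in range(z + 1):
            poisson = exp(-lam) * lam ** k / factorial(k)
            prob -= poisson * (1.0 - (q / p) ** (z - k))
        return prob

    # settlement error shrinks exponentially with latency (confirmations)
    for z in (1, 3, 6, 12):
        print(f"z = {z:2d}: failure probability ~ {settlement_failure_prob(0.10, z):.2e}")
    ```

    Running it shows the characteristic trade-off: each added confirmation multiplies the failure probability by roughly q/(1-q).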
  2. Risk-limiting audits (RLAs) are rigorous statistical procedures meant to detect invalid election results. RLAs examine paper ballots cast during the election to statistically assess the possibility of a disagreement between the winner determined by the ballots and the winner reported by tabulation. The design of an RLA must balance risk against efficiency: "risk" refers to a bound on the chance that the audit fails to detect such a disagreement when one occurs; "efficiency" refers to the total effort to conduct the audit. The most efficient approaches, measured by the number of ballots that must be inspected, proceed by "ballot comparison." However, ballot comparison requires an (untrusted) declaration of the contents of each cast ballot, rather than a simple tabulation of vote totals. This "cast-vote record table" (CVR) is then spot-checked against ballots for consistency. In many practical settings, the cost of generating a suitable CVR dominates the cost of conducting the audit, which has prevented widespread adoption of these sample-efficient techniques. We introduce a new RLA procedure: an "adaptive ballot comparison" audit. In this audit, a global CVR is never produced; instead, a three-stage procedure is iterated: 1) a batch is selected, 2) a CVR is produced for that batch, and 3) a ballot within the batch is sampled, inspected by auditors, and compared with the CVR. We prove that such an audit can achieve risk commensurate with standard comparison audits while generating only a fraction of the CVR. We present three main contributions: (1) a formal adversarial model for RLAs; (2) the definition and analysis of an adaptive audit procedure with rigorous risk limits and an associated correctness analysis accounting for the incidental errors arising in typical audits; and (3) an analysis of efficiency. (A schematic sketch of the three-stage loop appears after this entry.)
    Free, publicly accessible full text available May 23, 2024
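    As a rough illustration of the three-stage loop described above, the following Python sketch iterates batch selection, per-batch CVR generation, and ballot comparison. The helpers make_cvr and inspect_ballot and the multiplicative risk update are hypothetical placeholders, not the paper's rigorous risk measure.

    ```python
    import random

    def adaptive_comparison_audit(batches, make_cvr, inspect_ballot,
                                  risk_limit=0.05, max_draws=10_000):
        """Toy three-stage loop. `make_cvr` and `inspect_ballot` are
        hypothetical helpers; the multiplicative risk update below is a
        placeholder for the paper's rigorous risk calculation."""
        risk, cvrs = 1.0, {}
        for _ in range(max_draws):
            if risk <= risk_limit:
                return True, cvrs        # audit confirms the reported winner
            # 1) select a batch, weighted by batch size
            b = random.choices(range(len(batches)),
                               weights=[len(x) for x in batches])[0]
            # 2) lazily produce a CVR for that batch only
            if b not in cvrs:
                cvrs[b] = make_cvr(batches[b])
            # 3) sample a ballot in the batch and compare it to its CVR entry
            i = random.randrange(len(batches[b]))
            mismatch = inspect_ballot(batches[b][i]) != cvrs[b][i]
            risk *= 1.5 if mismatch else 0.9
        return False, cvrs               # escalate (e.g., to a full hand count)
    ```

    Note that CVRs accumulate only for batches the sample actually touches, which is the source of the cost savings the entry describes.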
  3. Free, publicly accessible full text available May 1, 2024
  4. Nakamoto proof-of-work ledger consensus currently underlies the majority of deployed cryptocurrencies and smart-contract blockchains. While a long and fruitful line of work has succeeded in identifying its exact security region (that is, the set of parametrizations under which it possesses asymptotic security), the existing theory does not provide concrete settlement time guarantees that are tight enough to inform practice. In this work we provide a new approach for obtaining concrete and practical settlement time guarantees suitable for reasoning about deployed systems. We give an efficient method for computing explicit upper bounds on settlement time as a function of the primary system parameters: honest and adversarial computational power and a bound on network delays. We implement this computational method and provide a comprehensive sample of concrete bounds for several settings of interest. We also analyze a well-known attack strategy to provide lower bounds on settlement times. For Bitcoin, for example, our upper and lower bounds are within 90 seconds of each other for 1-hour settlement, assuming 10-second network delays and a 10% adversary. In comparison, the best prior result leaves a gap of 2 hours between its upper and lower bounds under the same parameters. (A rough simulation of the underlying attack appears after this entry.)
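    For a feel of where the lower bounds come from, here is a crude Monte Carlo sketch of the private-mining race under the entry's example parameters (10% adversary, 10-second delays, 1-hour settlement). The delay penalty and tie-breaking rule are simplistic assumptions; the paper derives rigorous analytic bounds rather than simulating.

    ```python
    import random

    def race_failure_rate(q=0.10, delay=10.0, block_time=600.0,
                          settlement=3600.0, trials=50_000):
        """Crude Monte Carlo for the private-mining attack: the adversary
        (fraction q of the hash power) mines privately while honest blocks,
        penalized by a delay-second network lag, extend the public chain."""
        rate = 1.0 / block_time                # total block rate, per second
        failures = 0
        for _ in range(trials):
            t, lead = 0.0, 0                   # lead = honest minus adversarial blocks
            while t < settlement:
                t += random.expovariate(rate)  # time to the next block, either side
                if random.random() < q:
                    lead -= 1                  # adversary extends its private chain
                elif random.random() > delay * (1 - q) * rate:
                    lead += 1                  # honest block, not wasted by delay (crude model)
            failures += lead <= 0              # adversary caught up: settlement fails
        return failures / trials

    print(f"estimated failure rate: {race_failure_rate():.4f}")
    ```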
  5. Fuzzy extractors derive stable keys from noisy sources. They are a fundamental tool for key derivation from biometric sources. This work introduces a new construction, code offset in the exponent: the first reusable fuzzy extractor that simultaneously supports structured, low-entropy distributions with correlated symbols and confidence information. These properties are specifically motivated by the most pertinent applications, key derivation from biometrics and physical unclonable functions, which typically exhibit low entropy with additional statistical correlations and benefit from extractors that can leverage confidence information for efficiency. Code offset in the exponent is a group encoding of the code offset construction (Juels and Wattenberg, CCS 1999): a random codeword of a linear error-correcting code is used as a one-time pad for a value sampled from the noisy source, but rather than encoding this directly, the construction encodes it by exponentiation of a generator in a cryptographically strong group. We introduce and characterize a condition on noisy sources that translates directly to security of the construction in the generic group model: the inner product between the source distribution and every vector in the null space of the code must be unpredictable. (A toy enrollment sketch appears after this entry.)
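    To make the construction concrete, here is a toy Python sketch of enrollment for code offset and its in-the-exponent variant. The repetition code, the Mersenne-prime group, and the choice of generator are illustrative stand-ins, not the paper's instantiation.

    ```python
    import secrets

    # Toy parameters: a Mersenne-prime modulus and an assumed generator.
    P = 2**127 - 1        # prime modulus for the group Z_P^*
    G = 3                 # assumed generator (illustrative choice only)
    Q = P - 1             # group order; exponents live mod Q

    def enroll(w):
        """Enroll a noisy reading w (a list of symbols mod Q) using a
        trivial repetition code as the stand-in linear code: the
        codeword is (r, r, ..., r) for a random r."""
        r = secrets.randbelow(Q)
        # classic code offset (Juels-Wattenberg): publish s = codeword + w
        s = [(r + wi) % Q for wi in w]
        # code offset in the exponent: publish g^(codeword + w) instead,
        # hiding the pad inside a cryptographically strong group
        return [pow(G, si, P) for si in s]

    reading = [secrets.randbelow(Q) for _ in range(8)]
    sketch = enroll(reading)
    # Recovery from a nearby reading w', and the unpredictability
    # condition on null-space inner products, are where the paper's
    # analysis lives; they are omitted from this sketch.
    ```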
  6. Pietrzak, K. (Ed.)
  7. Dietz, Karl-Josef (Ed.)
    Plants in dryland ecosystems experience extreme daily and seasonal fluctuations in light, temperature, and water availability. We used an in situ field experiment to uncover the effects of natural and reduced levels of ultraviolet radiation (UV) on maximum PSII quantum efficiency (Fv/Fm), the relative abundance of photosynthetic pigments and antioxidants, and the transcriptome in the desiccation-tolerant desert moss Syntrichia caninervis. We tested the hypotheses that (i) S. caninervis plants undergo sustained thermal quenching of light [non-photochemical quenching (NPQ)] while desiccated and after rehydration; (ii) a reduction of UV will result in improved recovery of Fv/Fm; but (iii) 1 year of UV removal will de-harden plants and increase vulnerability to UV damage, indicated by a reduction in Fv/Fm. All field-collected plants had extremely low Fv/Fm after initial rehydration but recovered over 8 d in lab-simulated winter conditions. UV-filtered plants had lower Fv/Fm during recovery, higher concentrations of photoprotective pigments and antioxidants such as zeaxanthin and tocopherols, and lower concentrations of neoxanthin and Chl b than plants exposed to near-natural UV levels. Field-grown S. caninervis underwent sustained NPQ that took days to relax before efficient photosynthesis resumed. Reduction of solar UV radiation adversely affected the recovery of Fv/Fm following rehydration.